Learning Optimal Advantage from Preferences and Mistaking It for Reward

Authors

  • W. Bradley Knox, University of Texas at Austin; Google
  • Stephane Hatgis-Kessell, University of Texas at Austin
  • Sigurdur Orn Adalgeirsson, Google
  • Serena Booth, MIT
  • Anca Dragan, University of California, Berkeley
  • Peter Stone, University of Texas at Austin; Sony AI
  • Scott Niekum, University of Massachusetts Amherst

DOI:

https://doi.org/10.1609/aaai.v38i9.28870

Keywords:

HAI: Learning Human Values and Preferences, ML: Imitation Learning & Inverse Reinforcement Learning, ML: Reinforcement Learning

Abstract

We consider algorithms for learning reward functions from human preferences over pairs of trajectory segments, as used in reinforcement learning from human feedback (RLHF). Most recent work assumes that human preferences are generated based only upon the reward accrued within those segments, i.e., their partial return. Recent work casts doubt on the validity of this assumption, proposing an alternative preference model based upon regret. We investigate the consequences of assuming preferences are based upon partial return when they actually arise from regret. We argue that the learned function is an approximation of the optimal advantage function, not a reward function. We find that if a specific pitfall is addressed, this incorrect assumption is not particularly harmful, resulting in a highly shaped reward function. Nonetheless, this incorrect usage of the approximation of the optimal advantage function is less desirable than the appropriate and simpler approach of greedily maximizing it. From the perspective of the regret preference model, we also provide a clearer interpretation of fine-tuning contemporary large language models with RLHF. Overall, this paper provides insight into why learning under the partial return preference model tends to work so well in practice, despite conforming poorly to how humans give preferences.
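To make the contrast above concrete, the following is a minimal sketch of the two preference models in LaTeX; the notation (segments \sigma_1 and \sigma_2, reward function r, optimal state value function V^*, and optimal advantage function A^*) is assumed here for exposition rather than quoted from the paper. The partial return model posits that the probability of a human preferring segment \sigma_1 over \sigma_2 is a logistic function of the reward each segment accrues:

    % Partial return preference model (Bradley-Terry over summed reward)
    P(\sigma_1 \succ \sigma_2)
      = \frac{\exp\left(\sum_t r(s_t^1, a_t^1)\right)}
             {\exp\left(\sum_t r(s_t^1, a_t^1)\right)
              + \exp\left(\sum_t r(s_t^2, a_t^2)\right)}

The regret model instead posits that humans prefer the segment with lower regret. Because a segment's negated regret telescopes into its summed optimal advantage (in the deterministic, undiscounted case, A^*(s_t, a_t) = r(s_t, a_t) + V^*(s_{t+1}) - V^*(s_t)), the same logistic form applies with A^* in place of r:

    % Regret preference model, rewritten via summed optimal advantage
    P(\sigma_1 \succ \sigma_2)
      = \frac{\exp\left(\sum_t A^*(s_t^1, a_t^1)\right)}
             {\exp\left(\sum_t A^*(s_t^1, a_t^1)\right)
              + \exp\left(\sum_t A^*(s_t^2, a_t^2)\right)}

Fitting a function \hat{r} under the first model to preferences actually generated by the second therefore yields \hat{r} \approx A^*, an approximation of the optimal advantage function. Treating \hat{r} as a reward then amounts to using a reward shaped by the potential function V^*, which is why the mistake is comparatively benign once the pitfall noted above is addressed.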

Published

2024-03-24

How to Cite

Knox, W. B., Hatgis-Kessell, S., Adalgeirsson, S. O., Booth, S., Dragan, A., Stone, P., & Niekum, S. (2024). Learning Optimal Advantage from Preferences and Mistaking It for Reward. Proceedings of the AAAI Conference on Artificial Intelligence, 38(9), 10066-10073. https://doi.org/10.1609/aaai.v38i9.28870

Issue

Vol. 38 No. 9 (2024)

Section

AAAI Technical Track on Humans and AI